Refactor of Refresh & Recovery procedures #158
Conversation
Technically, this helper function creates a random polynomial where the `root` parameter is a root of the polynomial, so `make_random_polynomial_with_root` is a better name.
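For illustration, a minimal sketch of one way such a helper can be built with arkworks (`ark-poly`): sample a random polynomial, then shift its constant term so that it vanishes at `root`. The body is an assumption for illustration, not this PR's actual implementation.

use ark_bls12_381::Fr;
use ark_poly::{univariate::DensePolynomial, DenseUVPolynomial, Polynomial};
use ark_std::{rand::Rng, Zero};

// Illustrative only: sample a random polynomial of the given degree,
// then shift the constant term so that `root` becomes a root.
fn make_random_polynomial_with_root<R: Rng>(
    degree: usize,
    root: &Fr,
    rng: &mut R,
) -> DensePolynomial<Fr> {
    let mut poly = DensePolynomial::<Fr>::rand(degree, rng);
    let at_root = poly.evaluate(root);
    poly.coeffs[0] -= at_root;
    debug_assert!(poly.evaluate(root).is_zero());
    poly
}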
There was no function for the refresh case, and since it's very similar to the recovery case, we define a common method for both here (`prepare_share_updates_with_root`).
The concept of "threshold" doesn't belong at the abstraction level of polynomials, which actually deal with degrees. For the record, it's always the case that `threshold == degree + 1` (a polynomial of degree `d` is determined by `d + 1` evaluations, hence the off-by-one).
Test using secret reconstruction directly, rather than via ciphertext decryption.
These functions assumed access to an update polynomial, but we're actually using several distributed polynomials (one for each participant), and we only have access to their evaluated points. Note that the preceding commits prepared for this removal.
Updates to shares were not applied correctly (see diff at L418-423). For the moment, we're recovering the same domain point, since that facilitates debugging; in the next commit I'll restore the test so there's a new validator with a random domain point.
Codecov Report

@@            Coverage Diff             @@
##             main     #158      +/-   ##
==========================================
+ Coverage   77.87%   77.94%   +0.06%
==========================================
  Files          23       23
  Lines        5031     5051      +20
==========================================
+ Hits         3918     3937      +19
- Misses       1113     1114       +1
* Removed benchmark for random polynomial with root (not much value, as it's essentially benchmarking arkworks)
* Commented out recovery & refresh benchmarks until #162 is solved
@@ -419,28 +430,37 @@ mod test_dkg_full {
        .decryption_key;

    // Creates updated private key shares
    // TODO: Why not use dkg.aggregate()?
IIRC `dkg.aggregate` would throw if we didn't feed enough transcripts into it. `dkg` has an internal state machine that tracks the progress of the DKG ritual, and we would have to strictly adhere to its protocol in order to successfully `aggregate` here. Since refreshing/recovery is not implemented in said state machine, it's easier to skip the DKG states and just call `aggregate` directly here.
let deltas_i = prepare_share_updates_for_refresh::<E>(
    &domain_points,
    &dkg.pvss_params.h.into_affine(),
    dkg.dkg_params.security_threshold as usize,
    rng,
);
Just a comment on the subject of the future refactor: we can see how this could be rewritten as `let deltas_i = dkg.prepare_share_updates_for_refresh(rng);`
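To make the shape of that refactor concrete, a toy, self-contained sketch of the pattern; `Dkg`, `ShareUpdate`, and the fields below are simplified stand-ins, not the real ferveo types:

// Toy sketch: turn the free function into a method, so the DKG object
// supplies its own parameters instead of every call site digging them out.
struct Dkg {
    domain_points: Vec<u64>,
    security_threshold: u32,
}

struct ShareUpdate(u64);

// Existing free-function style.
fn prepare_share_updates_for_refresh(
    domain_points: &[u64],
    security_threshold: usize,
) -> Vec<ShareUpdate> {
    domain_points
        .iter()
        .take(security_threshold)
        .map(|p| ShareUpdate(*p))
        .collect()
}

impl Dkg {
    // Suggested method style: `dkg.prepare_share_updates_for_refresh()`.
    fn prepare_share_updates_for_refresh(&self) -> Vec<ShareUpdate> {
        prepare_share_updates_for_refresh(
            &self.domain_points,
            self.security_threshold as usize,
        )
    }
}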
Absolutely
ferveo/src/lib.rs
DecryptionShareSimple::create(
    &validator_keypair.decryption_key,
    &updated_shares.get(share_index).unwrap(),
    &ciphertext.header().unwrap(),
    aad,
    &dkg.pvss_params.g_inv(),
)
.unwrap()
Not specific to this PR but to the future refactor: perhaps we could avoid this section by creating a `DecryptionShareSimple` in `pvss_aggregated.update_private_key_share_for_recovery`? I think there is no other reason for moving `updated_shares` around than eventually creating a `DecryptionShareSimple`, so we could collapse this logic into one step. Naming TBD.
Not really, updating private shares and creating decryption shares are absolutely independent. We create decryption shares here since it's an acceptance test.
You're right, it's completely the other way around: the nucypher usage of the ferveo API confirms that these happen at two different time steps. Ursulas would never update their shares and create a decryption share at the same time; these are two completely different user flows.
use rand_core::RngCore;
use tpke::{lagrange_basis_at, PrivateKeyShare};

// SHARE UPDATE FUNCTIONS:
We could always put those functions into a module if we think it would help organize them better:

pub mod share_update {
    pub fn prepare_share_updates_for_recover ...
    pub fn apply_updates_to_private_share ...
    // etc.
}

But on second thought, these sections will be different by the time we're done with the refactoring. Anyway, just throwing it out there; it's an option.
Oh, that's a nice idea, I didn't know that. Let's consider it for the next refactor.
debug_assert!(poly.evaluate(root) == E::ScalarField::zero());
debug_assert!(poly.coeffs.len() == degree + 1);
Just a comment on this macro usage: `debug_assert!` doesn't show up in a `release` target, i.e. this assertion would be removed in production. If we want this assertion to actually throw in production, we should use `assert!` instead. I didn't think it would be necessary, so I opted for the former.
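As a quick self-contained illustration of the difference (standard Rust behavior, nothing ferveo-specific; the function and its arguments are made up for the example):

// `debug_assert!` compiles to nothing when debug assertions are disabled
// (the default for `cargo build --release`), while `assert!` always runs.
fn check_poly_shape(coeffs_len: usize, degree: usize) {
    // Stripped from release builds: cheap internal sanity check.
    debug_assert!(coeffs_len == degree + 1);
    // Kept in all builds: use this if production must fail loudly.
    assert!(coeffs_len == degree + 1, "expected degree + 1 coefficients");
}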
// Finally, let's recreate the shared private key from the refreshed shares
let new_shared_private_key = recover_share_from_updated_private_shares(
    &ScalarField::zero(),
    &domain_points[..threshold],
    &refreshed_shares[..threshold],
);

assert_eq!(
    shared_private_key,
    new_shared_private_key.private_key_share
);
We could test the red path here (not enough shares), similarly to what we do in `tdec_simple_variant_share_recovery_at_selected_point`.
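For instance, a hedged sketch reusing the names from the snippet above (whether interpolation yields a wrong key or should panic outright is an assumption to be settled in the actual test):

// Sketch only: with fewer than `threshold` refreshed shares,
// interpolation should not reproduce the original shared private key.
let not_enough_shares = recover_share_from_updated_private_shares(
    &ScalarField::zero(),
    &domain_points[..threshold - 1],
    &refreshed_shares[..threshold - 1],
);
assert_ne!(
    shared_private_key,
    not_enough_shares.private_key_share
);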
&domain_points[..(threshold - 1)],
&new_share_fragments[..(threshold - 1)],
We could throw in an `assert!` here to make sure `domain_points` and `new_share_fragments` have a proper length. It's not strictly necessary, but it relates to some of the original issues we had with those tests.
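Something along these lines, for instance (the expected lengths are an assumption based on the slicing above):

// Sketch: guard the slicing above against silently short inputs.
assert!(domain_points.len() >= threshold - 1);
assert!(new_share_fragments.len() >= threshold - 1);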
LGTM 🎉
Just some comments, but nothing blocking.
🎸 A bit intricate for me to provide a comprehensive review, but it looks good.
Type of PR:
Required reviews:
What this does:
This PR is mainly a refactor of code related to refresh & recovery procedures. The main contributions are:
* Refresh & recovery code moved from the `tpke` crate to the `ferveo` crate

Issues fixed/closed:
Why it's needed:
This work is the basis for implementing the recovery & refresh protocols (see #163).
Notes for reviewers:
No breaking changes!
Work in this PR has surfaced these issues: